Goto

Collaborating Authors


Neural Information Processing Systems

We kindly thank the reviewers for their detailed reviews, valuable feedback, and suggestions for improvement. Indeed, our proof of the new SW theorem relies on an "ordering" of the coordinates; an equivariant SW theorem under an arbitrary finite group action would be desirable, but its proof is out of our reach as of today. In a way, this limitation is similar to the distinction between unordered "point clouds" and ordered inputs. We will add this discussion in the paper, and mention it in the abstract. In its "deep" original version, it covers all types of "Message-Passing" GNNs, but not spectral GNNs, which use powers of the adjacency matrix. We will clarify this in the final version.
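To make the message-passing vs. spectral distinction concrete, here is a minimal NumPy sketch (an illustration under our own assumptions, not the paper's construction; the function names, toy graph, and weights are invented): a message-passing layer aggregates only over the one-hop neighborhood, while a polynomial "spectral" layer explicitly mixes powers A^0, A^1, ..., A^K of the adjacency matrix.

```python
import numpy as np

def message_passing_layer(A, X, W):
    """One message-passing step: each node sums its neighbors' features
    (one application of A), then applies a shared linear map."""
    return np.tanh((A @ X) @ W)

def spectral_layer(A, X, weights):
    """A polynomial-filter ("spectral") layer: mixes powers of the
    adjacency matrix, A^0, A^1, ..., A^(K-1), one weight matrix each."""
    out = np.zeros((X.shape[0], weights[0].shape[1]))
    Ak = np.eye(A.shape[0])  # current power of A, starting at A^0 = I
    for Wk in weights:
        out += (Ak @ X) @ Wk
        Ak = Ak @ A
    return np.tanh(out)

# Toy graph: a 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
H_mp = message_passing_layer(A, X, W)
H_sp = spectral_layer(A, X, [W, W, W])  # uses A^0, A^1, A^2
```

Note that a message-passing layer is the special case of the spectral layer that uses only the first power of A; higher powers (A^2, A^3, ...) let a single spectral layer reach beyond the one-hop neighborhood, which is why the two families are treated separately.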


Partially Encrypted Machine Learning using Functional Encryption

Neural Information Processing Systems

We sincerely thank the reviewers for their helpful comments. We clarify some details of the article below. In fact, this article shows what can already be achieved even though FE isn't as mature as homomorphic encryption. We do detail and reference many notions from cryptology: the ML community may not be familiar with these new concepts, and we sought to introduce them carefully and rigorously. In contrast, classical notions of ML do not need to be referenced as much, because they are well established.



Reviewers' main concern was about the need for additional baselines

Neural Information Processing Systems

We thank the reviewers for their constructive feedback. Reviewers found our method to be novel. We also include the linear FastText baseline; our new Table 3 reflects this comparison to standard models. We anticipate our performance to be competitive but somewhat lower (they report an LSD of 2.5). We agree this comparison is better for presentation, and we will include it in the paper.
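For readers unfamiliar with the metric: "LSD" here presumably denotes the log-spectral distance, commonly defined as the RMS difference, in dB, between two log power spectra. A minimal sketch under that assumption (the function name and toy spectra are ours, not from the paper):

```python
import numpy as np

def log_spectral_distance(p_ref, p_est, eps=1e-12):
    """RMS difference (in dB) between two power spectra across
    frequency bins; eps guards against log(0)."""
    diff = 10 * np.log10(p_ref + eps) - 10 * np.log10(p_est + eps)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy power spectrum: identical spectra give distance 0; a uniform
# 10x power ratio gives exactly 10 dB.
p = np.array([1.0, 0.5, 0.25])
```

In practice the per-frame distances would be averaged over an utterance; lower values mean closer spectra.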



Neural Information Processing Systems

The reviewer brings up a good point. SNG-DBSCAN recovers these clusters at rates depending on various properties of the density function. We will further clarify these constant factor dependencies.
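As a rough illustration of the subsampled-neighborhood-graph idea (a simplified sketch under our own assumptions, not the paper's exact algorithm or its recovery rates): sample a fraction of the pairwise edges, keep the sampled edges shorter than eps, mark well-connected points as core, and cluster the core subgraph's connected components.

```python
import numpy as np
from itertools import combinations

def sng_dbscan(X, eps, sample_rate, min_deg, seed=0):
    """Simplified sketch: subsample pairwise edges with probability
    sample_rate, keep edges shorter than eps, mark points with at
    least min_deg kept edges as core, and label connected components
    of the core subgraph (-1 = unclustered)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    adj = [set() for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if rng.random() < sample_rate and np.linalg.norm(X[i] - X[j]) <= eps:
            adj[i].add(j)
            adj[j].add(i)
    core = {i for i in range(n) if len(adj[i]) >= min_deg}
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for start in sorted(core):
        if labels[start] != -1:
            continue
        stack = [start]  # flood fill over core points
        while stack:
            u = stack.pop()
            if labels[u] != -1:
                continue
            labels[u] = cluster
            stack.extend(v for v in adj[u] if v in core and labels[v] == -1)
        cluster += 1
    return labels

# Two well-separated blobs; with sample_rate=1.0 this reduces to the
# full epsilon-neighborhood graph.
blob = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [0.05, 0.05]])
X = np.vstack([blob, blob + 10.0])
labels = sng_dbscan(X, eps=0.5, sample_rate=1.0, min_deg=2)
```

The point of subsampling is that only a sample_rate fraction of the O(n^2) distances is ever computed; the density-dependent constants mentioned above govern how small sample_rate can be while the two blobs still come out as separate clusters.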


We will incorporate the clarity recommendations the reviewers suggest, turning now to the main concerns of each reviewer

Neural Information Processing Systems

We thank the reviewers for their valuable feedback, which will improve the paper. Regarding the reviewer's comments about applications, we chose to limit the number of applications to three. Some prior mechanisms rely on heavy-tailed noise (e.g., the Cauchy distribution, which has unbounded variance), in contrast to our mechanisms. As requested, we will add a discussion of related work on lower bounds for private mechanisms. Regarding the reviewer's main comment on the contributions of this paper relative to Asi & Duchi (2020), we believe such general (vector-valued) functions are the main focus of this submission. We thank the reviewer for bringing to our attention Reimherr & Awan's K-norm mechanism (2019), which we will certainly discuss more carefully in the final version.
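To illustrate the unbounded-variance point (a generic demonstration, not the paper's mechanism; the scales and sample size are arbitrary): Laplace noise with scale b, as in the standard Laplace mechanism with b = sensitivity/epsilon, has finite variance 2b^2, whereas standard Cauchy noise has no finite variance, so its empirical variance is dominated by a few extreme draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Laplace noise with scale b = 1 has finite variance 2 * b**2 = 2,
# so the sample variance concentrates near 2.
lap = rng.laplace(scale=1.0, size=n)

# Standard Cauchy noise has undefined mean and infinite variance:
# the sample variance never stabilizes and is driven by rare,
# enormous draws far out in the tails.
cau = rng.standard_cauchy(size=n)
```

Running this, the Laplace sample variance sits close to its true value while the Cauchy sample "variance" is orders of magnitude larger and changes wildly with the seed, which is the practical cost of heavy-tailed noise that bounded-variance mechanisms avoid.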






General comments

Neural Information Processing Systems

We thank the reviewers for their insightful feedback. While we realize this may be subjective, even in this "oracle" setting we notice patterns similar to Table 2 in the paper: e.g., this phenomenon is much more present in Dec-Dec attention, whereas Enc-Dec attention suffers much more from keeping only one head (-18.89). We will temper the "only one head is sufficient" claim in the abstract, introduction, and conclusion. We see two issues here. We will add this to the revised paper.
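A toy NumPy sketch of the head-pruning setup (our own illustration; the shapes, names, and masking scheme are assumptions, not the paper's code): zeroing a head's output removes its contribution while leaving the remaining heads untouched, which is how "keeping only one head" can be simulated at inference time.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Q, K, V, n_heads, head_mask):
    """Toy multi-head attention over pre-split feature slices;
    head_mask[h] in {0, 1} prunes head h by zeroing its output."""
    d = Q.shape[-1] // n_heads  # per-head dimension
    outs = []
    for h in range(n_heads):
        s = slice(h * d, (h + 1) * d)
        attn = softmax(Q[:, s] @ K[:, s].T / np.sqrt(d))
        outs.append(head_mask[h] * (attn @ V[:, s]))
    return np.concatenate(outs, axis=-1)

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
full = multi_head_attention(Q, K, V, n_heads=2, head_mask=[1, 1])
one = multi_head_attention(Q, K, V, n_heads=2, head_mask=[1, 0])
```

Because each head attends over its own slice, masking head 1 leaves head 0's output identical, so any drop in downstream quality (such as the -18.89 figure above) is attributable to the removed heads alone.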